Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty
When using deep neural networks to operate safety-critical systems, assessing
the sensitivity of the network outputs when subject to uncertain inputs is of
paramount importance. Such assessment is commonly done using reachability
analysis or robustness certification. However, certification techniques
typically ignore localization information, while reachable set methods can fail
to issue robustness guarantees. Furthermore, many advanced methods are either
computationally intractable in practice or restricted to very specific models.
In this paper, we develop a data-driven optimization-based method capable of
simultaneously certifying the safety of network outputs and localizing them.
The proposed method provides a unified assessment framework, as it subsumes
state-of-the-art reachability analysis and robustness certification. The method
applies to deep neural networks of all sizes and structures, and to random
input uncertainty with a general distribution. We develop sufficient conditions
for the convexity of the underlying optimization, and for the number of data
samples to certify and localize the outputs with overwhelming probability. We
experimentally demonstrate the efficacy and tractability of the method on a
deep ReLU network.
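As an illustrative sketch only (not the paper's actual optimization method), the flavor of data-driven output localization can be seen by sampling random inputs, propagating them through a network, and bounding the outputs; the network weights, input distribution, and safe region below are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small ReLU network: weights are random placeholders,
# not taken from the paper.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def relu_net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Sample N inputs from the (here Gaussian) input distribution, propagate
# them, and take the axis-aligned bounding box of the outputs as a
# data-driven localization of the output set.
N = 1000
samples = np.array([relu_net(rng.normal(0.0, 0.1, size=2)) for _ in range(N)])
lower, upper = samples.min(axis=0), samples.max(axis=0)

# A safety check then amounts to comparing the box against a safe region
# (the bounds here are arbitrary for illustration).
safe = bool(np.all(lower > -100) and np.all(upper < 100))
print(lower, upper, safe)
```

The paper's contribution is the sample-complexity analysis behind such a procedure: how many samples suffice so that the resulting certificate holds with overwhelming probability.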
Projected Randomized Smoothing for Certified Adversarial Robustness
Randomized smoothing is the current state-of-the-art method for producing
provably robust classifiers. While randomized smoothing typically yields robust
ℓ2-ball certificates, recent research has generalized provable robustness
to different norm balls as well as anisotropic regions. This work considers a
classifier architecture that first projects onto a low-dimensional
approximation of the data manifold and then applies a standard classifier. By
performing randomized smoothing in the low-dimensional projected space, we
characterize the certified region of our smoothed composite classifier back in
the high-dimensional input space and prove a tractable lower bound on its
volume. We show experimentally on CIFAR-10 and SVHN that classifiers without
the initial projection are vulnerable to perturbations that are normal to the
data manifold and yet are captured by the certified regions of our method. We
compare the volume of our certified regions against various baselines and show
that our method improves on the state-of-the-art by many orders of magnitude.
Comment: Transactions on Machine Learning Research (TMLR) 202
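A minimal sketch of the projected-smoothing idea, with a hypothetical random projection `P` and a toy base classifier standing in for the paper's trained components: Gaussian noise is added in the k-dimensional projected space rather than the d-dimensional input space.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 16, 2  # ambient and projected dimensions (toy values)
P = rng.standard_normal((k, d)) / np.sqrt(d)  # placeholder projection matrix

def base_classifier(z):
    # Toy classifier on the k-dim projection: sign of the first coordinate.
    return int(z[0] > 0)

def smoothed_classify(x, sigma=0.25, n=2000):
    # Randomized smoothing applied in the low-dimensional projected space:
    # noise is drawn in R^k around P @ x, not in R^d around x.
    z = P @ x
    noise = rng.normal(0.0, sigma, size=(n, k))
    votes = np.array([base_classifier(z + eps) for eps in noise])
    counts = np.bincount(votes, minlength=2)
    return int(counts.argmax()), counts.max() / n  # class and vote fraction

x = rng.standard_normal(d)
label, p_hat = smoothed_classify(x)
print(label, p_hat)
```

In the full method, the vote fraction would feed a high-probability lower bound on the top-class probability, and the resulting certified region is characterized back in the high-dimensional input space.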
Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing
While prior research has proposed a plethora of methods that enhance the
adversarial robustness of neural classifiers, practitioners are still reluctant
to adopt these techniques due to their unacceptably severe penalties in clean
accuracy. This paper shows that by mixing the output probabilities of a
standard classifier and a robust model, where the standard network is optimized
for clean accuracy and is not robust in general, this accuracy-robustness
trade-off can be significantly alleviated. We show that the robust base
classifier's confidence difference for correct and incorrect examples is the
key ingredient of this improvement. In addition to providing intuitive and
empirical evidence, we also theoretically certify the robustness of the mixed
classifier under realistic assumptions. Furthermore, we adapt an adversarial
input detector into a mixing network that adaptively adjusts the mixture of the
two base models, further reducing the accuracy penalty of achieving robustness.
The proposed flexible method, termed "adaptive smoothing", can work in
conjunction with existing or even future methods that improve clean accuracy,
robustness, or adversary detection. Our empirical evaluation considers strong
attack methods, including AutoAttack and adaptive attack. On the CIFAR-100
dataset, our method achieves an 85.21% clean accuracy while maintaining a
38.72% ℓ∞-AutoAttacked (ε = 8/255) accuracy, becoming the
second most robust method on the RobustBench CIFAR-100 benchmark as of
submission, while improving the clean accuracy by ten percentage points
compared with all listed models. The code that implements our method is
available at https://github.com/Bai-YT/AdaptiveSmoothing
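The mixing step can be sketched as a convex combination of the two base models' output probabilities; the logits below are toy placeholders, and the scalar `alpha` stands in for the paper's per-input mixing network.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mixed_classifier(logits_std, logits_rob, alpha):
    # Convex mixture of the two base models' probabilities:
    # alpha = 0 recovers the clean-accurate model, alpha = 1 the robust one.
    # In the paper, alpha is produced per input by a mixing network adapted
    # from an adversarial-input detector; here it is just a scalar knob.
    p = (1.0 - alpha) * softmax(logits_std) + alpha * softmax(logits_rob)
    return int(p.argmax()), p

# Toy logits (placeholders): the standard model is confident on class 0,
# the robust model leans toward class 1.
std = np.array([4.0, 0.0, 0.0])
rob = np.array([0.0, 1.0, 0.0])

for alpha in (0.0, 0.5, 1.0):
    label, p = mixed_classifier(std, rob, alpha)
    print(alpha, label, np.round(p, 3))
```

The confidence-difference observation in the abstract is what makes such a mixture work: the robust model's confidence gap between correct and incorrect examples lets an adaptive alpha defer to the right base model.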
Asymmetric Certified Robustness via Feature-Convex Neural Networks
Recent works have introduced input-convex neural networks (ICNNs) as learning
models with advantageous training, inference, and generalization properties
linked to their convex structure. In this paper, we propose a novel
feature-convex neural network architecture as the composition of an ICNN with a
Lipschitz feature map in order to achieve adversarial robustness. We consider
the asymmetric binary classification setting with one "sensitive" class, and
for this class we prove deterministic, closed-form, and easily-computable
certified robust radii for arbitrary ℓp-norms. We theoretically justify
the use of these models by characterizing their decision region geometry,
extending the universal approximation theorem for ICNN regression to the
classification setting, and proving a lower bound on the probability that such
models perfectly fit even unstructured uniformly distributed data in
sufficiently high dimensions. Experiments on Malimg malware classification and
subsets of MNIST, Fashion-MNIST, and CIFAR-10 datasets show that feature-convex
classifiers attain state-of-the-art certified ℓ1-radii as well as
substantial ℓ2- and ℓ∞-radii while being far more
computationally efficient than any competitive baseline.
Comment: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
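The input-convexity underlying these certificates can be illustrated with a toy two-layer ICNN (random placeholder weights, not the paper's trained model): nonnegative weights on the hidden layer plus convex, nondecreasing activations make the scalar output convex in the input, which a midpoint check confirms numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer input-convex neural network (ICNN) sketch. Convexity in the
# input follows because each ReLU component is convex, a nonnegative
# combination of convex functions is convex, and adding a linear skip
# term preserves convexity. Weights are random placeholders.
W0 = rng.standard_normal((8, 3))           # input -> hidden (unconstrained)
U1 = np.abs(rng.standard_normal((1, 8)))   # hidden -> output (nonnegative)
A1 = rng.standard_normal((1, 3))           # linear skip connection

def icnn(x):
    h = np.maximum(W0 @ x, 0.0)
    return float(U1 @ h + A1 @ x)

# Numerical convexity check: f((x + y)/2) <= (f(x) + f(y))/2 on random pairs.
ok = True
for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    ok &= icnn(0.5 * (x + y)) <= 0.5 * (icnn(x) + icnn(y)) + 1e-9
print(ok)
```

In the feature-convex architecture, such an ICNN is composed with a Lipschitz feature map, and the closed-form certified radii for the sensitive class follow from this convex structure.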
Quantitative Assessment of Robotic Swarm Coverage
This paper studies a generally applicable, sensitive, and intuitive error
metric for the assessment of robotic swarm density controller performance.
Inspired by vortex blob numerical methods, it overcomes the shortcomings of a
common strategy based on discretization, and unifies other continuous notions
of coverage. We present two benchmarks against which to compare the error
metric value of a given swarm configuration: non-trivial bounds on the error
metric, and the probability density function of the error metric when robot
positions are sampled at random from the target swarm distribution. We give
rigorous results that this probability density function of the error metric
obeys a central limit theorem, allowing for more efficient numerical
approximation. For both of these benchmarks, we present supporting theory,
computation methodology, examples, and MATLAB implementation code.
Comment: Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Porto, Portugal, 29--31 July 2018. 11 pages, 4 figures
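As a loose illustration of the second benchmark (using a simple placeholder metric, not the paper's blob-based one), one can sample random robot configurations from the target distribution, tabulate the metric's empirical distribution, and then invoke the central limit theorem to replace further expensive sampling with a normal approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's error metric: the mean squared
# distance of N sampled robot positions from the target distribution's
# mean. The real metric compares vortex-blob-smoothed densities; this
# placeholder only illustrates the sampling-plus-CLT benchmark.
def error_metric(positions):
    return float(np.mean(np.sum(positions**2, axis=1)))

N = 50          # robots per configuration
trials = 2000   # sampled configurations

# Empirical distribution of the metric when robot positions are drawn
# i.i.d. from the target distribution (standard 2-D Gaussian here).
values = np.array(
    [error_metric(rng.standard_normal((N, 2))) for _ in range(trials)]
)

# CLT benchmark: approximate that distribution by a normal with matched
# mean and standard deviation.
mu, sigma = values.mean(), values.std()
print(round(mu, 2), round(sigma, 3))
```

For this placeholder metric the per-robot squared distance has mean 2 under a standard 2-D Gaussian, so the configuration-level metric concentrates near 2 with spread shrinking as the number of robots grows, exactly the behavior the CLT-based approximation exploits.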
National Athletic Trainers' Association Position Statement: Preventing Sudden Death in Sports
To present recommendations for the prevention and screening, recognition, and treatment of the most common conditions resulting in sudden death in organized sports.
Robust SARS-CoV-2 T cell responses with common TCRαβ motifs toward COVID-19 vaccines in patients with hematological malignancy impacting B cells
Immunocompromised hematology patients are vulnerable to severe COVID-19 and respond poorly to vaccination. Relative deficits in immunity are, however, unclear, especially after 3 vaccine doses. We evaluated immune responses in hematology patients across three COVID-19 vaccination doses. Seropositivity was low after a first dose of BNT162b2 and ChAdOx1 (∼26%), increased to 59%–75% after a second dose, and increased to 85% after a third dose. While prototypical antibody-secreting cells (ASCs) and T follicular helper (Tfh) cell responses were elicited in healthy participants, hematology patients showed prolonged ASCs and skewed Tfh2/17 responses. Importantly, vaccine-induced expansions of spike-specific and peptide-HLA tetramer-specific CD4+/CD8+ T cells, together with their T cell receptor (TCR) repertoires, were robust in hematology patients, irrespective of B cell numbers, and comparable to healthy participants. Vaccinated patients with breakthrough infections developed higher antibody responses, while T cell responses were comparable to healthy groups. COVID-19 vaccination induces robust T cell immunity in hematology patients of varying diseases and treatments irrespective of B cell numbers and antibody response.